Unlike traditional supervised learning, in many settings only partial feedback is available. We may only observe outcomes for the actions that were chosen, but not the counterfactual outcomes associated with the other alternatives. Such settings encompass a wide variety of applications, including pricing, online marketing, and precision medicine. The key challenge is that observational data are influenced by the historical policy deployed in the system, yielding a biased data distribution. We approach this task as a domain adaptation problem and propose a self-training algorithm which imputes outcomes for the finite unobserved actions in the observational data, simulating a randomized trial through pseudo-labeling; we refer to it as Counterfactual Self-Training (CST). CST iteratively imputes pseudo-labels and retrains the model. In addition, we show that an input-consistency loss can further improve CST performance, as indicated by recent theoretical analyses of pseudo-labeling. We demonstrate the effectiveness of the proposed algorithms on both synthetic and real datasets.
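The impute-then-retrain loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the classifier choice, the one-hot action featurization, and all function names are my own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def action_features(X, a, n_actions):
    """Concatenate a one-hot action encoding onto the context features."""
    return np.hstack([X, np.eye(n_actions)[a]])

def counterfactual_self_training(X, a_obs, y_obs, n_actions, n_iters=5):
    """Minimal CST-style loop: impute pseudo-labels for the unobserved
    actions, then retrain on observed plus pseudo-labeled data."""
    model = LogisticRegression(max_iter=1000)
    model.fit(action_features(X, a_obs, n_actions), y_obs)
    for _ in range(n_iters):
        # Impute hard pseudo-labels for the actions that were never taken
        # for each context, simulating a randomized trial over all actions.
        Xs, ys = [action_features(X, a_obs, n_actions)], [y_obs]
        for a in range(n_actions):
            mask = a_obs != a
            if mask.any():
                Xa = action_features(X[mask], np.full(mask.sum(), a), n_actions)
                Xs.append(Xa)
                ys.append(model.predict(Xa))
        # Retrain on the union of observed outcomes and pseudo-labels.
        model.fit(np.vstack(Xs), np.concatenate(ys))
    return model
```

After each retraining pass, the pseudo-labels are recomputed with the new model, so labels and model co-evolve until they stabilize.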
We study a pricing setting in which each customer is offered a product based on customer and/or product features that are predictive of the customer's valuation for that product. Often only historical sales records are available, in which we observe whether each customer purchased the product at the price prescribed, rather than the customer's true valuation. As such, the data are influenced by the historical sales policy, which makes it difficult to estimate the future loss/regret of a new policy without the possibility of conducting real experiments, and to optimize new policies for downstream tasks such as revenue management. We study how to formulate loss functions that can be used to directly optimize pricing strategies, rather than going through an intermediate demand estimation stage, which may be biased in practice due to model misspecification, regularization, or poor calibration. While existing approaches have been proposed for the case where valuation data are available, we propose loss functions for the observational data setting. To achieve this, we adapt ideas from machine learning with corrupted labels, where we may regard each observed customer outcome (purchase or no purchase at the prescribed price) as a (known) probabilistic transformation of the customer's valuation. From this transformation we derive a class of suitable unbiased loss functions. Within this class, we identify minimum-variance estimators and estimators robust to poor demand function estimation, and provide guidance on when an estimated demand function is useful. Furthermore, we show that, when applied to our contextual pricing setting, estimators popular in the off-policy evaluation literature fall within this class of loss functions, and we provide managerial insight into when each estimator is likely to perform well in practice.
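To make the "unbiased loss from logged data" idea concrete, here is the plainest member of such a class: an inverse-propensity estimate of the revenue of a candidate price, computed from logged (price, purchase) pairs. This is an illustrative sketch only; the paper derives a whole class of unbiased losses and identifies minimum-variance members, and the names and the assumption of a known randomized discrete price menu are mine.

```python
import numpy as np

def ipw_revenue(logged_prices, bought, candidate_price, price_probs):
    """Unbiased inverse-propensity estimate of the expected revenue had
    every customer been offered `candidate_price`, using logged
    (price, purchase) pairs. Assumes historical prices were randomized
    over a known discrete menu with probabilities `price_probs`
    (a dict mapping price -> probability)."""
    # Only customers who actually saw the candidate price contribute;
    # reweighting by the inverse propensity removes the logging bias.
    match = np.isclose(logged_prices, candidate_price)
    w = match / price_probs[candidate_price]
    return float(np.mean(w * bought * logged_prices))
```

On simulated data with valuations uniform on [0, 1] and a uniform menu, the estimate at price 0.5 recovers the true expected revenue 0.5 · P(v ≥ 0.5) = 0.25.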
When employing PAC-Bayes theory for risk certification, it is usually necessary to estimate and bound the risk of the PAC-Bayes posterior. Many works in the literature adopt a method that requires a large dataset, incurring a high computational cost. This manuscript presents a very general alternative that saves computation on the order of the dataset size.
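For background (this is standard material, not taken from the abstract above), the bound typically being certified is the PAC-Bayes-kl inequality of Maurer (2004): for a prior $P$ fixed before seeing the $n$-sample $S$, with probability at least $1-\delta$, simultaneously for all posteriors $Q$,

```latex
\mathrm{kl}\!\left(\hat{R}_S(Q)\,\middle\|\,R(Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
```

where $\mathrm{kl}$ is the binary KL divergence, $\hat{R}_S(Q)$ the empirical risk of the posterior, and $R(Q)$ its true risk. Estimating $\hat{R}_S(Q)$, an expectation over $Q$ evaluated on the data, is the step whose cost scales with the dataset size.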
We study the generalization properties of majority voting on finite ensembles of classifiers, proving margin-based generalization bounds via PAC-Bayes theory. These provide state-of-the-art guarantees on a number of classification tasks. Our central results leverage the Dirichlet posteriors recently studied by Zantedeschi et al. [2021] for training voting classifiers; in contrast to that work, our bounds apply to non-randomized votes through the use of margins. Our contributions add perspective to the debate on the "margin theory" proposed by Schapire et al. [1998] for the generalization of ensemble classifiers.
We focus on a specific class of shallow neural networks with a single hidden layer, namely those with $L_2$-normalised data and either a sigmoid-shaped Gaussian error function ("erf") activation or a Gaussian Error Linear Unit (GELU) activation. For these networks, we derive new generalisation bounds through PAC-Bayesian theory. Unlike most existing bounds, they apply to neural networks with deterministic rather than randomised parameters. Our bounds are empirically non-vacuous when the network is trained with vanilla stochastic gradient descent on MNIST and Fashion-MNIST.
We give a general recipe for derandomising PAC-Bayesian bounds using margins, the critical ingredient being that our randomised predictions concentrate around some value. The tools we develop lead directly to margin bounds for various classifiers, including linear prediction (a class that includes boosting and the support vector machine), single-hidden-layer neural networks with an unusual \(\mathrm{erf}\) activation function, and deep ReLU networks. Furthermore, we extend to partially-derandomised predictors, where only some of the randomness is removed, letting us extend bounds to cases where the concentration properties of our predictors are otherwise poor.
Deep neural networks excel at image classification, but their performance is far less robust to input perturbations than human perception. In this work, we explore whether this shortcoming may be partly addressed by incorporating brain-inspired recurrent dynamics into deep convolutional networks. We take inspiration from a popular framework in neuroscience: "predictive coding". At each layer of the hierarchical model, generative feedback "predicts" (i.e., reconstructs) the pattern of activity in the previous layer. The reconstruction errors are used to iteratively update the network's representations across timesteps, and to optimize the network's feedback weights over a natural image dataset, a form of unsupervised training. We show that implementing this strategy in two popular networks, VGG16 and EfficientNetB0, improves their robustness against various corruptions and adversarial attacks. We hypothesize that other feedforward networks could similarly benefit from the proposed framework. To promote research in this direction, we provide an open-source PyTorch-based package called Predify, which can be used to implement and investigate the effects of predictive coding dynamics in any convolutional neural network.
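The inference-time dynamics described above can be sketched for a toy two-layer hierarchy. This is a minimal numpy illustration under my own assumptions (toy dimensions, random weights, made-up gain values), not Predify's actual update rule, which also includes a memory term and trains the feedback weights by reconstruction on natural images.

```python
import numpy as np

rng = np.random.default_rng(0)

x0 = rng.normal(size=8)               # lower-layer activity (e.g. the input layer)
W_ff = 0.1 * rng.normal(size=(4, 8))  # feedforward weights
W_fb = 0.1 * rng.normal(size=(8, 4))  # generative feedback weights
x1 = W_ff @ x0                        # initial higher-layer representation

beta, lam = 0.2, 0.1                  # feedforward-drive / error-correction gains
err_norms = []
for t in range(20):
    pred = W_fb @ x1                  # feedback 'prediction' (reconstruction) of x0
    err = x0 - pred                   # reconstruction error
    err_norms.append(np.linalg.norm(err))
    # Iteratively refine x1: retain some feedforward drive, plus a gradient
    # step that reduces the reconstruction error of the layer below.
    x1 = (1 - beta) * x1 + beta * (W_ff @ x0) + lam * (W_fb.T @ err)
```

Over the timesteps, the representation settles toward a fixed point balancing the feedforward drive against the reconstruction-error feedback, and the reconstruction error shrinks.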
With the advent of Neural Style Transfer (NST), stylizing an image has become quite popular. A convenient way of extending stylization techniques to videos is to apply them on a per-frame basis. However, such per-frame application usually lacks temporal consistency, expressed as undesirable flickering artifacts. Most of the existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks. They (1) are only suitable for a limited range of stylization techniques, (2) can only be applied in an offline fashion requiring the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency-control. Note that existing consistent video-filtering approaches aim to completely remove flickering artifacts and thus do not respect any specific consistency-control aspect. For stylization tasks, however, consistency-control is an essential requirement, as a certain amount of flickering can add to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To meet the above requirements, we propose an approach that can stylize video streams while providing interactive consistency-control. Apart from stylization, our approach also supports various other image processing filters. To achieve interactive performance, we develop a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. We show that the final consistent video output using our flow network is comparable to that obtained using a state-of-the-art optical-flow network. Further, we employ an adaptive combination of local and global consistent features and enable interactive selection between the two. Through objective and subjective evaluation, we show that our method is superior to state-of-the-art approaches.
In this work, we address the problem of unsupervised moving object segmentation (MOS) in 4D LiDAR data recorded from a stationary sensor, where no ground truth annotations are involved. Deep learning-based state-of-the-art methods for LiDAR MOS strongly depend on annotated ground truth data, which is expensive to obtain and scarce in existence. To close this gap in the stationary setting, we propose a novel 4D LiDAR representation based on multivariate time series that relaxes the problem of unsupervised MOS to a time series clustering problem. More specifically, we propose modeling the change in occupancy of a voxel by a multivariate occupancy time series (MOTS), which captures spatio-temporal occupancy changes on the voxel level and its surrounding neighborhood. To perform unsupervised MOS, we train a neural network in a self-supervised manner to encode MOTS into voxel-level feature representations, which can be partitioned by a clustering algorithm into moving or stationary. Experiments on stationary scenes from the Raw KITTI dataset show that our fully unsupervised approach achieves performance that is comparable to that of supervised state-of-the-art approaches.
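The reduction of unsupervised MOS to time-series clustering can be caricatured on a toy 2D grid. This is an illustrative sketch under my own assumptions; the paper encodes MOTS with a self-supervised network before clustering, whereas here plain k-means on the raw series stands in for that learned representation.

```python
import numpy as np
from sklearn.cluster import KMeans

def mots_features(occ, radius=1):
    """Build a multivariate occupancy time series (MOTS) per voxel: its own
    occupancy over time plus the mean occupancy of a spatial neighborhood.
    `occ` has shape (T, X, Y): a toy 2D grid standing in for LiDAR voxels."""
    T, X, Y = occ.shape
    feats = []
    for x in range(X):
        for y in range(Y):
            x0, x1 = max(0, x - radius), min(X, x + radius + 1)
            y0, y1 = max(0, y - radius), min(Y, y + radius + 1)
            own = occ[:, x, y].astype(float)
            hood = occ[:, x0:x1, y0:y1].mean(axis=(1, 2))
            feats.append(np.concatenate([own, hood]))
    return np.array(feats)  # shape (X*Y, 2T)

def cluster_moving(occ, n_clusters=2, seed=0):
    """Partition voxels into moving vs. stationary by clustering their MOTS."""
    feats = mots_features(occ)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(feats)
```

On a synthetic grid where one region stays occupied and another flickers over time, the two regions land in different clusters without any labels.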
Implicit Neural Representations (INR) have recently been shown to be a powerful tool for high-quality video compression. However, existing works are limited in that they do not explicitly exploit the temporal redundancy in videos, leading to long encoding times. Additionally, these methods have fixed architectures which do not scale to longer videos or higher resolutions. To address these issues, we propose NIRVANA, which treats videos as groups of frames and fits separate networks to each group performing patch-wise prediction. This design shares computation within each group, in the spatial and temporal dimensions, resulting in reduced encoding time for the video. The video representation is modeled autoregressively, with the network fit on the current group initialized using weights from the previous group's model. To further enhance efficiency, we quantize the network parameters during training, requiring no post-hoc pruning or quantization. Compared with previous works on the benchmark UVG dataset, NIRVANA improves encoding quality from 37.36 to 37.70 (in terms of PSNR) and encoding speed by 12X, while maintaining the same compression rate. In contrast to prior video INR works, which struggle with larger resolutions and longer videos, we show that our algorithm is highly flexible and scales naturally due to its patch-wise and autoregressive designs. Moreover, our method achieves variable-bitrate compression by adapting to videos with varying inter-frame motion. NIRVANA achieves 6X decoding speed and scales well with more GPUs, making it practical for various deployment scenarios.
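The group-wise autoregressive fitting can be caricatured in a few lines. This is a deliberately tiny stand-in under my own assumptions: the real model is a patch-wise INR with quantization-aware training, whereas here a linear coordinate-to-intensity model plays its role; all names are mine. The point illustrated is that initializing each group's network from the previous group's weights lets later groups converge in far fewer optimization steps.

```python
import numpy as np

def fit_group(group, w=None, steps=100, lr=0.4):
    """Fit a tiny linear coordinate->intensity model to one group of frames
    (a stand-in for NIRVANA's per-group patch-wise INR). Returns the learned
    weights, for warm-starting the next group, and the final MSE."""
    G, H, W = group.shape
    target = group.mean(axis=0).ravel()  # per-pixel target over the group
    ii, jj = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W),
                         indexing="ij")
    X = np.stack([ii.ravel(), jj.ravel(), np.ones(H * W)], axis=1)
    if w is None:
        w = np.zeros(3)                  # cold start
    for _ in range(steps):
        # Plain gradient descent on the per-pixel MSE.
        w -= lr * 2 * X.T @ (X @ w - target) / len(target)
    return w, float(np.mean((X @ w - target) ** 2))
```

With consecutive groups that differ only by a slight drift, a warm-started fit reaches a lower loss than a cold start under the same small step budget, mirroring the encoding-time savings of the autoregressive design.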